A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems

Abstract

This paper analyzes the generalization error of two-layer neural networks for computing the ground state of a Schrödinger operator on a d-dimensional hypercube with Neumann boundary conditions. We prove that the convergence rate of the generalization error is independent of the dimension d, under the a priori assumption that the ground state lies in a spectral Barron space. We verify this assumption by proving a new regularity estimate for the ground state in the spectral Barron space. The latter is achieved by a fixed point argument based on the Krein-Rutman theorem.
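
As a rough illustration of the variational setting described above (not the paper's actual training scheme), the sketch below uses a two-layer network ansatz and minimizes a Monte Carlo estimate of the Rayleigh quotient for a toy operator -Δ + V with the illustrative potential V(x) = |x|² on the unit hypercube; leaving the boundary unconstrained corresponds to the natural (Neumann) boundary condition. The network width, sample count, and optimizer are likewise illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch (assumed setup, not the paper's scheme): estimate the lowest
# eigenvalue of -Δu + V u = λ u on [0, 1]^d with natural (Neumann) boundary
# conditions by minimizing a Monte Carlo estimate of the Rayleigh quotient
# over a two-layer (one-hidden-layer) network ansatz.

rng = np.random.default_rng(0)
d, m, n_mc = 2, 8, 2000                     # dimension, hidden width, MC samples
X = rng.uniform(0.0, 1.0, size=(n_mc, d))   # Monte Carlo quadrature points
V = np.sum(X**2, axis=1)                    # toy potential V(x) = |x|^2

def unpack(theta):
    W = theta[:m*d].reshape(m, d)
    b = theta[m*d:m*d+m]
    a = theta[m*d+m:]
    return W, b, a

def rayleigh(theta):
    """Monte Carlo Rayleigh quotient of u(x) = a·tanh(Wx + b) + 1."""
    W, b, a = unpack(theta)
    z = np.tanh(X @ W.T + b)        # hidden activations, shape (n_mc, m)
    u = z @ a + 1.0                 # +1 keeps the ansatz away from u ≡ 0
    du = (1.0 - z**2) * a           # chain-rule factors, shape (n_mc, m)
    grad_u = du @ W                 # ∇u at each sample, shape (n_mc, d)
    num = np.mean(np.sum(grad_u**2, axis=1) + V * u**2)
    den = np.mean(u**2)
    return num / den

theta0 = 0.1 * rng.standard_normal(m*d + 2*m)
res = minimize(rayleigh, theta0, method="L-BFGS-B")
print(f"estimated lowest eigenvalue ≈ {res.fun:.3f}")
```

Since the Rayleigh quotient upper-bounds the ground-state eigenvalue for any trial function, the minimized value is a variational upper bound on λ₁.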

Related articles

High-dimensional dynamics of generalization error in neural networks

We perform an average case analysis of the generalization dynamics of large neural networks trained using gradient descent. We study the practically-relevant “high-dimensional” regime where the number of free parameters in the network is on the order of or even larger than the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the gener...

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...

Solving Eigenvalue Problems on Networks of Processors

In recent times, work on networks of processors has become very important due to the low cost and availability of these systems, which makes it interesting to study algorithms for them. In this paper we study different eigenvalue solvers on networks of processors. In particular, the power method, deflation, the Givens algorithm, Davidson methods and Jacobi methods are an...
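
As a minimal illustration of one of the solvers named above, here is a serial power-method sketch for a symmetric matrix; the distribution of this iteration across a network of processors is the paper's actual subject and is not reproduced here.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of a symmetric matrix A by power iteration."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)       # deterministic starting vector
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v                     # apply the operator
        v_new = w / np.linalg.norm(w) # renormalize the iterate
        lam_new = v_new @ A @ v_new   # Rayleigh-quotient eigenvalue estimate
        if abs(lam_new - lam) < tol:  # stop once the estimate stabilizes
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

# Tiny example: dominant eigenvalue of diag(3, 1, 0.5) is 3.
A = np.diag([3.0, 1.0, 0.5])
lam, v = power_method(A)
```

Convergence is geometric with ratio |λ₂/λ₁|, which is why deflation (also named above) is used to recover subsequent eigenpairs once the dominant one is known.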

Generalization in a two-layer neural network.

Learning of a fully connected two-layer neural network with N input nodes, M hidden nodes and a single output node is studied using the annealed approximation. We study the generalization curve, i.e. the average generalization error as a function of the number of examples. When the number of examples is on the order of N, the generalization error decreases rapidly and the system is in a p...

Solving Linear Semi-Infinite Programming Problems Using Recurrent Neural Networks

The linear semi-infinite programming problem is an important class of optimization problems which deals with infinitely many constraints. In this paper, to solve this problem, we combine a discretization method and a neural network method. By a simple discretization of the infinite constraints, we convert the linear semi-infinite programming problem into a linear programming problem. Then, we use...
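
The discretization step described above can be sketched for a hypothetical one-dimensional index set: the infinite family of constraints a(t)·x ≤ b(t) for all t in [0, 1] is replaced by the same constraints on a finite grid, and the resulting ordinary LP is solved (here with SciPy's linprog rather than a recurrent neural network). The specific objective, constraint family, and grid size are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Invented example: minimize x0 + x1 subject to x0 + t*x1 >= t^2 for ALL
# t in [0, 1] — a semi-infinite constraint. Discretize t on a finite grid
# and impose the constraint only at the grid points.
t = np.linspace(0.0, 1.0, 51)                 # grid over the index set

c = np.array([1.0, 1.0])                      # objective coefficients
# Rewrite x0 + t*x1 >= t^2 in linprog's A_ub @ x <= b_ub form:
A = np.column_stack([-np.ones_like(t), -t])   # one row per grid point
b = -(t**2)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.fun, res.x)
```

For this example the grid contains t = 1, where the constraint forces x0 + x1 ≥ 1, so the discretized LP attains the semi-infinite optimum of 1 exactly; in general, finer grids tighten the discretized problem toward the semi-infinite one.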


Journal

Journal title: Communications of the American Mathematical Society

Year: 2022

ISSN: 2692-3688

DOI: https://doi.org/10.1090/cams/5